4 - 20.1.4. Modeling Uncertainty [ID:29043]

Okay, so how do we model uncertainty? That's really the question here.

We've laid out our basic infrastructure and said: yes, we need lots of possible worlds, and we must entertain lots of hypotheses about what the world could be like.

Let's go back to the wumpus.

Okay, so this is a more realistic picture of the Wumpus world.

There's lots of stuff we can't see; it's dark in the cave.

We've been to three cells: (1,1), (1,2), and (2,1).

We know that there's a breeze at (1,2) and a breeze at (2,1).

Okay, and the breeze tells us something about these three cells: (1,3), (2,2), and (3,1).

And really the question for the agent is which one of these three should I explore next?

The thing to realize is that all three are unsafe, because the breeze at (1,2) says there is a pit at (1,3) or at (2,2), and the breeze at (2,1) says there is a pit at (2,2) or at (3,1).

Where should I go next?

I don't know whether I'd pick (1,3) or (3,1), but in my opinion the probability of a pit in either of those is lower: a single pit at (2,2) already suffices to explain both breezes.
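That intuition can be made precise by brute-force enumeration of the possible worlds. Here is a minimal sketch, assuming the standard Wumpus-world prior of 0.2 for a pit in any unvisited cell and the breezes observed at (1,2) and (2,1); the function name and cell labels are my own:

```python
from itertools import product

def pit_probabilities(prior=0.2):
    """Posterior pit probabilities for the frontier cells, given a breeze
    at (1,2) (pit at (1,3) or (2,2)) and at (2,1) (pit at (2,2) or (3,1))."""
    cells = ["(1,3)", "(2,2)", "(3,1)"]
    totals = {c: 0.0 for c in cells}
    norm = 0.0
    for world in product([False, True], repeat=3):
        p13, p22, p31 = world
        # Keep only worlds consistent with both observed breezes.
        if not ((p13 or p22) and (p22 or p31)):
            continue
        weight = 1.0
        for has_pit in world:
            weight *= prior if has_pit else 1 - prior
        norm += weight
        for cell, has_pit in zip(cells, world):
            if has_pit:
                totals[cell] += weight
    return {cell: totals[cell] / norm for cell in cells}
```

Running this gives roughly 0.31 for (1,3) and for (3,1), but about 0.86 for (2,2): a single pit in the middle cell explains both breezes, so that cell is by far the most likely to be deadly.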

And if I give you ten euros to go to the middle cell, (2,2), would you do it?

Okay. What would I have to pay you?

Okay. You're right. With that argumentation you're way ahead of our agents.

Because for our agents right now we only have sets of possible worlds.

And we have no way of distinguishing between them.

In a sense, every possible world is created equal.

All of them are equally unsafe.

Okay. So what you're postulating is something that lets us measure how strongly we believe that a given world is the actual one.

And that's exactly what we need.

And you've done something even more: you've reasoned over these worlds with likelihoods attached.

But you've been doing rule-of-thumb, follow-my-nose reasoning.

What you've not been able to do is to quantify the risks.

If you had known what your life is worth, and what the likelihood of dying is, then you would have known how much to ask of me.

And then you could add a factor of two so it's worth it, something like that.

So what we really want is two things.

One is that we want a way of distinguishing between possible worlds in terms of the likelihood of each being a representation of the actual world.

And we want to do this in a quantifiable way.

Only then can our agents behave optimally.

And I've used ten euros here to actually simulate something like a utility function.

And we want to weigh these likelihoods against utilities.

That is why we need quantifiable models of the likelihood of possible worlds being the actual world.

That's what we want to arrive at.
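To illustrate how likelihoods and utilities combine, here is a hypothetical expected-utility calculation. All the numbers (the ten-euro offer, the value placed on survival, the pit probabilities) are illustrative assumptions of mine, not anything fixed by the lecture:

```python
def expected_utility(p_pit, reward, life_value):
    # Collect the reward if the cell is safe; lose life_value if it has a pit.
    return (1 - p_pit) * reward - p_pit * life_value

def break_even_reward(p_pit, life_value):
    # Smallest payment that makes exploring worth it (expected utility zero).
    return p_pit / (1 - p_pit) * life_value

# Hypothetical stakes: a 10-euro offer, survival valued at 1000 euros,
# pit probabilities of 0.31 for a corner cell and 0.86 for the middle cell.
eu_corner = expected_utility(0.31, 10, 1000)
eu_middle = expected_utility(0.86, 10, 1000)
```

With these made-up stakes, even the corner cells are a bad deal, and the middle cell is far worse; the break-even payment for the middle cell comes out above six thousand euros. That is the "add a factor of two" haggling made quantitative.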

Can we do that with the tools we already have?

Maybe we don't have to learn anything new; maybe we can do it with logic.

I'm going to leave the Wumpus world, because we've already looked at it in logic.

I'm going to use another running example: a dental diagnosis system.

You go to a doctor, dentist, and you have a toothache.

And then they do what they do, prod you with metal objects and so on.

And then at some point they say, oh yes, of course you have a cavity.

And now here let's look at how a doctor might reason.

So you want to know about symptoms and you want to know about diseases.

Linking symptoms and diseases is actually what diagnosis is.

So we might just say something like: if the symptom of p is a toothache, with p equal to Peter, maybe, then p's disease is a cavity. As a rule: Symptom(p, Toothache) ⇒ Disease(p, Cavity).

What do you think?

Is this rule correct?

Part of chapter: Chapter 20. Quantifying Uncertainty

Access: Open access

Duration: 00:28:00 min

Recording date: 2021-01-28

Uploaded: 2021-02-01 10:56:08

Language: en-US
The Wumpus World example continued. Problems with formalizing uncertainty in logical formulas. How to get probabilities.
